L1-regularization path algorithm for generalized linear models

Authors

  • Mee Young Park
  • Trevor Hastie
Abstract

We introduce a path following algorithm for L1-regularized generalized linear models. The L1-regularization procedure is useful especially because it, in effect, selects variables according to the amount of penalization on the L1-norm of the coefficients, in a manner that is less greedy than forward selection–backward deletion. The generalized linear model path algorithm efficiently computes solutions along the entire regularization path by using the predictor–corrector method of convex optimization. Selecting the step length of the regularization parameter is critical in controlling the overall accuracy of the paths; we suggest intuitive and flexible strategies for choosing appropriate values. We demonstrate the implementation with several simulated and real data sets.
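
The predictor–corrector path algorithm itself is specific to the authors' implementation; as a rough illustration of the object it computes, a minimal sketch can approximate the solution path of an L1-penalized logistic GLM on a decreasing grid of penalty values with warm starts. The scikit-learn solver, the simulated data, and the grid below are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Simulated data (illustrative assumption).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X = StandardScaler().fit_transform(X)

# Decreasing penalty lambda corresponds to increasing C = 1 / lambda.
Cs = np.logspace(-2, 2, 25)
clf = LogisticRegression(penalty="l1", solver="saga",
                         warm_start=True, max_iter=10000)

coefs = []
for C in Cs:
    clf.set_params(C=C)
    clf.fit(X, y)                      # warm-started from the previous grid point
    coefs.append(clf.coef_.ravel().copy())

coefs = np.asarray(coefs)              # shape (n_grid_points, n_features)
print((coefs != 0).sum(axis=1))        # active set grows as the penalty shrinks
```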

Similar articles

Large-scale Inversion of Magnetic Data Using Golub-Kahan Bidiagonalization with Truncated Generalized Cross Validation for Regularization Parameter Estimation

In this paper a fast method for large-scale sparse inversion of magnetic data is considered. The L1-norm stabilizer is used to generate models with sharp and distinct interfaces. To deal with the non-linearity introduced by the L1-norm, a model-space iteratively reweighted least squares algorithm is used. The original model matrix is factorized using the Golub-Kahan bidiagonalization that proje...
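
A minimal NumPy sketch of the model-space iteratively reweighted least squares idea for an L1 penalty (the Golub-Kahan bidiagonalization and the GCV-based parameter estimation are omitted); the matrix A, data b, penalty lam, and smoothing constant eps are illustrative assumptions.

```python
import numpy as np

def irls_l1(A, b, lam=1.0, eps=1e-6, n_iter=50):
    """Minimize ||A x - b||^2 + lam * ||x||_1 by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # start from the unpenalized fit
    for _ in range(n_iter):
        # |x_i| is approximated by x_i^2 / (|x_i| + eps), giving a weighted ridge problem.
        W = np.diag(1.0 / (np.abs(x) + eps))
        x = np.linalg.solve(A.T @ A + lam * W, A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                        # a sparse "model"
b = A @ x_true + 0.05 * rng.standard_normal(100)
print(np.round(irls_l1(A, b, lam=1.0), 2))           # small entries are driven toward zero
```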

Efficient Global Approximation of Generalized Nonlinear l1-Regularized Solution Paths and Its Applications

We consider efficient construction of nonlinear solution paths for general l1-regularization. Unlike the existing methods that incrementally build the solution path through a combination of local linear approximation and recalibration, we propose an efficient global approximation to the whole solution path. With the loss function approximated by a quadratic spline, we show that the solution path...
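
As a rough illustration of working with a whole solution path rather than refitting at each penalty value: for squared-error loss the lasso path is exactly piecewise linear, so coefficients at any penalty can be recovered by interpolating between the path's knots (for the nonlinear losses targeted above, the path is only approximately piecewise linear). The simulated data and the query point below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=100, n_features=8, noise=5.0, random_state=0)
alphas, _, coefs = lars_path(X, y, method="lasso")   # knots of the exact lasso path

# Coefficients at an arbitrary penalty value, by interpolating between knots.
alpha_query = 0.5 * (alphas[0] + alphas[-1])
beta = np.array([np.interp(alpha_query, alphas[::-1], coefs[j, ::-1])
                 for j in range(coefs.shape[0])])
print(np.round(beta, 2))
```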

A Survey of L1 Regression

L1 regularization, or regularization with an L1 penalty, is a popular idea in statistics and machine learning. This paper reviews the concept and application of L1 regularization for regression. It is not our aim to present a comprehensive list of the utilities of the L1 penalty in the regression setting. Rather, we focus on what we believe is the set of most representative uses of this regular...
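
A minimal sketch of the property that motivates much of this literature: the L1 penalty sets some coefficients exactly to zero, unlike an unpenalized fit, and so performs variable selection. The data set and penalty value are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, LinearRegression

X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=5.0).fit(X, y)

print(np.round(ols.coef_, 2))    # typically no exact zeros
print(np.round(lasso.coef_, 2))  # many coefficients exactly zero
```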

Lasso Regularization Paths for NARMAX Models via Coordinate Descent

We propose a new algorithm for estimating NARMAX models with L1 regularization for models represented as a linear combination of basis functions. Due to the L1-norm penalty the Lasso estimation tends to produce some coefficients that are exactly zero and hence gives interpretable models. The novelty of the contribution is the inclusion of error regressors in the Lasso estimation (which yields a...
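
A minimal coordinate-descent sketch for the plain lasso (the NARMAX error regressors of the paper are not modelled here): each coefficient is updated in turn by soft-thresholding its correlation with the partial residual, which is what produces the exact zeros mentioned above. The data and penalty value are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]        # partial residual
            beta[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 12))
beta_true = np.zeros(12)
beta_true[[0, 3, 7]] = [1.5, -2.0, 0.8]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
print(np.round(lasso_cd(X, y, lam=20.0), 2))              # sparse estimate
```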

Regularization Path Algorithms for Detecting Gene Interactions

In this study, we consider several regularization path algorithms with grouped variable selection for modeling gene interactions. When fitting with categorical factors, including the genotype measurements, we often define a set of dummy variables that represent a single factor/interaction of factors. Yuan & Lin (2006) proposed the group-Lars and the group-Lasso methods through which these groups...
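
A minimal proximal-gradient sketch of the grouped penalty idea: the dummy variables encoding one categorical factor are penalized as a single group, so the whole factor enters or leaves the model together. This is a generic group-lasso proximal step, not the group-Lars algorithm of Yuan & Lin; the data, grouping, penalty, and step size are illustrative assumptions.

```python
import numpy as np

def group_prox(beta, groups, t):
    """Block soft-thresholding: shrink each group's sub-vector by its norm."""
    out = beta.copy()
    for g in groups:
        nrm = np.linalg.norm(beta[g])
        out[g] = 0.0 if nrm == 0 else max(0.0, 1 - t / nrm) * beta[g]
    return out

rng = np.random.default_rng(0)
n = 300
factor = rng.integers(0, 3, size=n)                  # categorical factor with 3 levels
X = np.column_stack([(factor == k).astype(float) for k in range(3)]
                    + [rng.standard_normal(n) for _ in range(4)])
groups = [[0, 1, 2], [3], [4], [5], [6]]             # the dummies form one group
y = 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)     # the factor is irrelevant here

lam, step = 5.0, 1.0 / np.linalg.norm(X, 2) ** 2     # step from the squared spectral norm
beta = np.zeros(X.shape[1])
for _ in range(500):
    grad = X.T @ (X @ beta - y)
    beta = group_prox(beta - step * grad, groups, step * lam)
print(np.round(beta, 2))                             # the dummy group is zeroed jointly
```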

Journal:

Volume:   Issue:

Pages:  -

Publication date: 2006